End-to-End Crop Row Navigation via LiDAR-Based Deep Reinforcement Learning
Mineiro, Ana Luiza, Affonso, Francisco, Becker, Marcelo
Abstract: Reliable navigation in under-canopy agricultural environments remains a challenge due to GNSS unreliability, cluttered rows, and variable lighting. To address these limitations, we present an end-to-end learning-based navigation system that maps raw 3D LiDAR data directly to control commands using a deep reinforcement learning policy trained entirely in simulation. Our method includes a voxel-based downsampling strategy that reduces the LiDAR input size by 95.83%, enabling efficient policy learning without relying on labeled datasets or manually designed control interfaces. The policy was validated in simulation, achieving a 100% success rate in straight-row plantations and showing a gradual decline in performance as row curvature increased, tested across varying sinusoidal frequencies and amplitudes.

Autonomous robots have seen significant growth in modern agriculture, particularly for under-canopy tasks such as plant phenotyping, crop row harvesting, and disease scouting. These applications require platforms that are not only compact and agile but also capable of accurately navigating between dense crop rows (Figure 1) [1]. However, reliable navigation in such environments remains an active area of research due to several challenges, including clutter and occlusions caused by narrow row spacing and the high visual variability introduced by different plant growth stages [2].
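The voxel-based downsampling idea can be sketched as follows. This is a minimal illustration of voxel-grid reduction, not the paper's implementation; `voxel_downsample` and all parameter values are hypothetical. Each occupied voxel is replaced by the centroid of its points, so dense clusters collapse to a few representatives (the paper reports a 95.83% reduction on its actual scans).

```python
import math
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Collapse a 3D point cloud to one centroid per occupied voxel."""
    bins = defaultdict(list)
    for p in points:
        key = tuple(math.floor(c / voxel_size) for c in p)
        bins[key].append(p)
    # Replace each voxel's points by their centroid.
    return [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in bins.values()]

# Two dense clusters of 500 points each collapse to two voxel centroids.
cloud = [(0.02 * (i % 5), 0.0, 0.0) for i in range(500)]
cloud += [(1.02 + 0.02 * (i % 5), 0.0, 0.0) for i in range(500)]
reduced = voxel_downsample(cloud, voxel_size=0.2)
```

The fixed-size result makes the policy input tractable regardless of how dense the raw scan is.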
CA-Cut: Crop-Aligned Cutout for Data Augmentation to Learn More Robust Under-Canopy Navigation
State-of-the-art visual under-canopy navigation methods are designed with deep learning-based perception models to distinguish traversable space from crop rows. While these models have demonstrated successful performance, they require large amounts of training data to ensure reliability in real-world field deployment. However, data collection is costly, demanding significant human resources for in-field sampling and annotation. To address this challenge, various data augmentation techniques, such as color jittering, Gaussian blur, and horizontal flipping, are commonly employed during model training to diversify the training data and enhance model robustness. In this paper, we hypothesize that using only these augmentation techniques may lead to suboptimal performance, particularly in complex under-canopy environments with frequent occlusions, debris, and non-uniform spacing of crops. Instead, we propose a novel augmentation method, called Crop-Aligned Cutout (CA-Cut), which masks out random regions of input images, spatially distributed around the crop rows on either side, to encourage trained models to capture high-level contextual features even when fine-grained information is obstructed. Our extensive experiments with a public cornfield dataset demonstrate that masking-based augmentations are effective for simulating occlusions and significantly improve robustness in semantic keypoint predictions for visual navigation. In particular, we show that biasing the mask distribution toward crop rows in CA-Cut is critical for enhancing both prediction accuracy and generalizability across diverse environments, achieving up to a 36.9% reduction in prediction error. In addition, we conduct ablation studies on the number of masks, the size of each mask, and the spatial distribution of masks to maximize overall performance.
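A minimal sketch of the crop-aligned masking idea, assuming an image as a nested list of pixel values and known crop-row column positions; `ca_cut` and its default parameters are hypothetical stand-ins for the paper's augmentation:

```python
import random

def ca_cut(image, row_centers, n_masks=4, mask_size=8, spread=5.0, seed=0):
    """Zero out square patches whose centres are drawn from Gaussians
    around the given crop-row column positions (crop-aligned cutout)."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]  # do not modify the input image
    for _ in range(n_masks):
        cx = int(rng.gauss(rng.choice(row_centers), spread))  # biased to rows
        cy = rng.randrange(h)                                 # uniform in height
        for y in range(max(0, cy - mask_size // 2), min(h, cy + mask_size // 2)):
            for x in range(max(0, cx - mask_size // 2), min(w, cx + mask_size // 2)):
                out[y][x] = 0
    return out

img = [[1] * 64 for _ in range(64)]       # toy all-ones "image"
aug = ca_cut(img, row_centers=[16, 48])   # crop rows near columns 16 and 48
```

Sampling mask centres from Gaussians around the rows, rather than uniformly, is what biases the occlusions toward the regions the model would otherwise over-rely on.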
Evaluating Path Planning Strategies for Efficient Nitrate Sampling in Crop Rows
Liu, Ruiji, Breitfeld, Abigail, Vijayarangan, Srinivasan, Kantor, George, Yandun, Francisco
Abstract: This paper presents a pipeline that combines high-resolution orthomosaic maps generated from UAS imagery with GPS-based global navigation to guide a skid-steered ground robot. We evaluated three path planning strategies: A* graph search, a Deep Q-learning (DQN) model, and a Heuristic search, benchmarking them on planning time and success rate in realistic simulation environments. Experimental results reveal that the Heuristic search achieves the fastest planning times (0.28 ms) and a 100% success rate, while the A* approach delivers near-optimal performance, and the DQN model, despite its adaptability, incurs longer planning delays and occasional suboptimal routing. These results highlight the advantages of deterministic rule-based methods in geometrically constrained crop-row environments and lay the groundwork for future hybrid strategies in precision agriculture.

Keywords: Path planning, autonomous control, crop rows, autonomous nitrate sampling

1. INTRODUCTION
Autonomous navigation in agricultural fields is challenging due to structured layouts with unstructured variability.
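Of the three planners compared above, A* is the easiest to sketch. A minimal 4-connected occupancy-grid version follows; the grid, unit step costs, and `astar` itself are illustrative assumptions, not the authors' benchmark code:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid (1 = crop row obstacle)."""
    def h(p):  # Manhattan-distance heuristic, admissible on a 4-connected grid
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, None)]
    came = {}
    best_g = {start: 0}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came:
            continue                      # lazy deletion of stale entries
        came[cur] = parent
        if cur == goal:                   # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g + 1 < best_g.get(nxt, float("inf"))):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None  # goal unreachable

# A 5x5 field with one vertical crop row (column 2) and a gap at the headland.
field = [[0, 0, 1, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 1, 0, 0],
         [0, 0, 0, 0, 0]]
path = astar(field, (0, 0), (0, 4))
```

The crop row forces the detour through the headland gap, which is exactly the geometric constraint that makes deterministic planners so effective in row-crop fields.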
Autonomous Navigation of 4WIS4WID Agricultural Field Mobile Robot using Deep Reinforcement Learning
Baby, Tom, Gohil, Mahendra Kumar, Bhattacharya, Bishakh
In future agricultural fields compatible with Agriculture 4.0, robots are envisaged to navigate through crops to perform functions such as pesticide spraying and fruit harvesting, which are complex tasks due to factors such as non-geometric internal obstacles, space constraints, and outdoor conditions. In this paper, we employ Deep Reinforcement Learning (DRL) to solve the problem of 4WIS4WID mobile robot navigation in a structured, automated agricultural field. This paper consists of three parts: parameterization of four-wheel steering configurations, crop row tracking using DRL, and autonomous navigation of the 4WIS4WID mobile robot through multiple crop rows using DRL. We show how to parameterize various four-wheel steering configurations with just two variables, covering symmetric four-wheel steering, zero-turn, and an additional steering configuration that allows the 4WIS4WID mobile robot to move laterally. Using DRL, we also followed an irregularly shaped crop row with symmetric four-wheel steering. In the multiple-crop-row simulation environment, we performed point-to-point navigation with the help of waypoints. Finally, a comparative analysis of various DRL algorithms with continuous action spaces was carried out.
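The two-variable idea can be illustrated with a toy mapping: one steering command plus one mode variable covering front-only, counter-phase (symmetric), and crab/lateral configurations. The mapping and the bicycle-model radius below are simplified assumptions for illustration, not the paper's exact parameterization:

```python
import math

def four_wheel_steer(delta, mode):
    """Map (delta, mode) to wheel angles (FL, FR, RL, RR).
    mode = 0: front-wheel steering only; mode = -1: counter-phase
    (symmetric) steering for tight turns; mode = +1: crab steering,
    which at delta = pi/2 moves the robot laterally."""
    return (delta, delta, mode * delta, mode * delta)

def turning_radius(wheelbase, delta_front, delta_rear):
    """Bicycle-model turning radius for front and rear steering angles."""
    return wheelbase / (math.tan(delta_front) - math.tan(delta_rear))

r_front_only = turning_radius(1.0, 0.3, 0.0)
r_symmetric = turning_radius(1.0, 0.3, -0.3)  # counter-phase halves the radius
```

Counter-phase rear steering halves the turning radius relative to front-only steering, which is what makes symmetric four-wheel steering attractive in tight crop rows.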
CropNav: a Framework for Autonomous Navigation in Real Farms
Gasparino, Mateus Valverde, Higuti, Vitor Akihiro Hisano, Sivakumar, Arun Narenthiran, Velasquez, Andres Eduardo Baquero, Becker, Marcelo, Chowdhary, Girish
Small robots that can operate under the plant canopy can enable new possibilities in agriculture. However, unlike larger autonomous tractors, autonomous navigation for such under-canopy robots remains an open challenge because the Global Navigation Satellite System (GNSS) is unreliable under the plant canopy. We present a hybrid navigation system that autonomously switches between different sets of sensing modalities to enable full-field navigation, both inside and outside of the crop. By choosing the appropriate path reference source, the robot can compensate for loss of GNSS signal quality and leverage row-crop structure to navigate autonomously. However, such switching can be tricky and difficult to execute at scale. Our system provides a solution by automatically switching between an exteroceptive-sensing-based system, such as Light Detection and Ranging (LiDAR) row-following navigation, and waypoint path tracking. In addition, we show how our system can detect when navigation fails and recover automatically, extending the autonomous operation time and mitigating the need for human intervention. Our system shows an improvement of about 750 m per intervention over GNSS-based navigation and 500 m over row-following navigation.
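The modality switch can be sketched as a simple per-tick rule; the threshold value and the function name are hypothetical, and CropNav's real logic additionally handles failure detection and recovery:

```python
def select_navigation_mode(gnss_quality, inside_row, quality_threshold=0.5):
    """Choose the path reference source for the current control tick:
    LiDAR row-following inside the crop when GNSS quality degrades,
    GNSS waypoint tracking otherwise (e.g. in the open headland)."""
    if inside_row and gnss_quality < quality_threshold:
        return "lidar_row_following"
    return "gnss_waypoints"
```

Making the switch automatic, rather than operator-driven, is what lets the approach scale to full-field runs.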
Learning to Turn: Diffusion Imitation for Robust Row Turning in Under-Canopy Robots
Sivakumar, Arun N., Thangeda, Pranay, Fang, Yixiao, Gasparino, Mateus V., Cuaran, Jose, Ornik, Melkior, Chowdhary, Girish
Under-canopy agricultural robots require robust navigation capabilities to enable full autonomy but struggle with tight row turning between crop rows due to degraded GPS reception, visual aliasing, occlusion, and complex vehicle dynamics. We propose an imitation learning approach using diffusion policies to learn row turning behaviors from demonstrations provided by human operators or privileged controllers. Simulation experiments in a corn field environment show potential in learning this task with only visual observations and velocity states. However, challenges remain in maintaining control within rows and handling varied initial conditions, highlighting areas for future improvement.
SPARROW: Smart Precision Agriculture Robot for Ridding of Weeds
Balasingham, Dhanushka, Samarathunga, Sadeesha, Arachchige, Gayantha Godakanda, Bandara, Anuththara, Wellalage, Sasini, Pandithage, Dinithi, Hansika, Mahaadikara M. D. J. T, de Silva, Rajitha
The advancements in precision agriculture are vital to support the increasing demand for the global food supply. Precision spot spraying is a major step towards reducing chemical usage for pest and weed control in agriculture. A novel spot-spraying algorithm that autonomously detects weeds and performs trajectory planning for the sprayer nozzle is proposed. Furthermore, this research introduces a vision-based autonomous navigation system that travels along the detected crop row, effectively synchronizing with the autonomous spraying algorithm. The proposed system is characterized by its cost-effectiveness, enabling the autonomous spraying of herbicides onto detected weeds.
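A nozzle trajectory planner can be sketched as a greedy nearest-neighbour ordering of the detected weed positions; this is an illustrative stand-in under that assumption, not the proposed algorithm:

```python
import math

def plan_spray_sequence(nozzle_xy, weeds):
    """Greedy nearest-neighbour ordering of detected weed positions,
    a simple stand-in for a sprayer-nozzle trajectory planner."""
    remaining = list(weeds)
    position, order = nozzle_xy, []
    while remaining:
        nearest = min(remaining, key=lambda w: math.dist(position, w))
        remaining.remove(nearest)
        order.append(nearest)
        position = nearest
    return order
```

For weeds strung out along a row, the greedy order visits them in spatial sequence, minimising nozzle travel between spray actions.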
LiDAR-Based Crop Row Detection Algorithm for Over-Canopy Autonomous Navigation in Agriculture Fields
Liu, Ruiji, Yandun, Francisco, Kantor, George
Autonomous navigation is crucial for various robotics applications in agriculture. However, many existing methods depend on RTK-GPS systems, which are expensive and susceptible to poor signal coverage. This paper introduces a state-of-the-art LiDAR-based navigation system that can achieve over-canopy autonomous navigation in row-crop fields, even when the canopy fully blocks the inter-row spacing. Our crop row detection algorithm can detect crop rows across diverse scenarios, encompassing various crop types, growth stages, weed presence, and discontinuities within the crop rows. Without utilizing the robot's global localization, our navigation system can navigate autonomously in these challenging scenarios, detect the end of the crop rows, and move to the next crop row autonomously, providing a crop-agnostic approach to navigating the whole row-crop field. This navigation system has undergone tests in various simulated agricultural fields, achieving an average autonomous driving accuracy of 2.98 cm without human intervention on the custom Amiga robot. In addition, qualitative results from actual soybean fields validate the potential of our LiDAR-based crop row detection algorithm for practical agricultural applications.
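At its core, row-relative navigation reduces to fitting a row centreline to LiDAR returns. A least-squares sketch is shown below, assuming 2D points (x lateral, y forward); the paper's detector is considerably more elaborate (clustering, end-of-row detection, crop-agnostic handling):

```python
def fit_row_line(points):
    """Least-squares fit of a row centreline x = a*y + b to 2D points
    (x lateral, y forward). b is the lateral offset at the robot (y = 0)
    and a encodes the heading error relative to the row."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sy * sx) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

# Points sampled along a row at lateral offset 0.5 with slight skew 0.1.
row_points = [(0.1 * y + 0.5, float(y)) for y in range(10)]
slope, offset = fit_row_line(row_points)
```

The fitted slope and offset are exactly the quantities a row-following controller needs, which is why the approach works without global localization.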
Crop Row Switching for Vision-Based Navigation: A Comprehensive Approach for Efficient Crop Field Navigation
de Silva, Rajitha, Cielniak, Grzegorz, Gao, Junfeng
Vision-based mobile robot navigation systems in arable fields are mostly limited to in-row navigation. The process of switching from one crop row to the next in such systems is often aided by GNSS sensors or multiple camera setups. This paper presents a novel vision-based crop row-switching algorithm that enables a mobile robot to navigate an entire field of arable crops using a single front-mounted camera. The proposed row-switching manoeuvre uses deep learning-based RGB image segmentation and depth data to detect the end of the crop row and the re-entry point to the next crop row, which are used in a multi-state row-switching pipeline. Each state of this pipeline uses visual feedback or wheel odometry of the robot to successfully navigate towards the next crop row. The proposed crop row navigation pipeline was tested in a real sugar beet field containing crop rows with discontinuities, varying light levels, shadows, and irregular headland surfaces. The robot could successfully exit from one crop row and re-enter the next using the proposed pipeline, with absolute median errors averaging 19.25 cm and 6.77° for the linear and rotational steps of the manoeuvre.
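The multi-state pipeline can be sketched as a small state machine; the state names and transition flags below are hypothetical labels for the exit / turn / re-entry stages described above:

```python
def row_switch_step(state, row_end_detected, turn_done, reentry_aligned):
    """Advance the row-switching pipeline by one tick. States mirror the
    manoeuvre stages: follow the row, exit at its end, turn towards the
    next row, align with the re-entry point, then resume in-row mode."""
    if state == "in_row" and row_end_detected:
        return "exit_row"
    if state == "exit_row":
        return "turn_to_next_row"
    if state == "turn_to_next_row" and turn_done:
        return "align_reentry"
    if state == "align_reentry" and reentry_aligned:
        return "in_row"
    return state
```

Driving each transition from visual feedback or odometry, as the paper does, means no single sensor has to work through the whole manoeuvre.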
Deep learning-based Crop Row Detection for Infield Navigation of Agri-Robots
de Silva, Rajitha, Cielniak, Grzegorz, Wang, Gang, Gao, Junfeng
Autonomous navigation in agricultural environments is challenged by the varying conditions that arise in arable fields. State-of-the-art solutions for autonomous navigation in such environments require expensive hardware such as RTK-GNSS. This paper presents a robust crop row detection algorithm that withstands such field variations using inexpensive cameras. Existing datasets for crop row detection do not represent all the possible field variations. A dataset of sugar beet images was created representing 11 field variations, comprising multiple growth stages, light levels, varying weed densities, curved crop rows, and discontinuous crop rows. The proposed pipeline segments the crop rows using a deep learning-based method and employs the predicted segmentation mask for extraction of the central crop row using a novel selection algorithm. The crop row detection algorithm was tested for detection performance and for its capability to support visual servoing along a crop row. The visual servoing-based navigation was tested in a realistic simulation scenario with real ground and plant textures. Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.
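Selecting the central crop row from a segmentation mask can be sketched with a column-histogram heuristic: pick the histogram peak nearest the image centre. This is an assumed simplification for illustration, not the paper's selection algorithm:

```python
def central_row_column(mask):
    """From a binary segmentation mask (rows of 0/1), build a column-sum
    histogram and return the local peak closest to the image centre."""
    width = len(mask[0])
    hist = [sum(row[c] for row in mask) for c in range(width)]
    peaks = [c for c in range(1, width - 1)
             if hist[c] > 0 and hist[c] >= hist[c - 1] and hist[c] >= hist[c + 1]]
    centre = (width - 1) / 2
    return min(peaks, key=lambda c: abs(c - centre))

# Three crop rows at columns 2, 5 and 8; the central one is column 5.
mask = [[1 if c in (2, 5, 8) else 0 for c in range(10)] for _ in range(6)]
```

Anchoring on the central row gives the visual-servoing controller a single stable reference even when neighbouring rows drift in and out of view.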